Abstract
Mental health disorders such as depression and anxiety are major global health concerns, yet many cases remain undetected in their early stages. With the widespread use of smartphones and digital platforms, individuals generate continuous behavioral data through daily activities such as communication, mobility, and online interactions. These digital behavioral signals can provide valuable insights into changes in mental well-being. This study explores the use of explainable artificial intelligence (XAI) to analyze digital behavioral data for the early detection of mental health risks. Data derived from smartphone usage patterns, activity levels, and textual content from online platforms can be examined using machine learning techniques to identify behavioral changes associated with psychological distress. Explainable AI methods are incorporated to ensure that the reasons behind model predictions are transparent and understandable. The research also emphasizes ethical considerations including user privacy, informed consent, and fairness in algorithmic analysis. Overall, this approach highlights the potential of combining digital behavioral signals with explainable AI to support early mental health screening while maintaining responsible and ethical use of technology.
Introduction
Mental health disorders such as depression, anxiety, bipolar disorder, PTSD, and schizophrenia are widespread and significantly impact global health, making early detection crucial. Traditional diagnostic methods rely on interviews and questionnaires, but modern digital technologies—like smartphones, wearables, and social media—enable continuous collection of behavioral data (e.g., activity levels, sleep patterns, location, and language use). These data act as digital biomarkers, forming a “digital phenotype” that can help infer mental health status.
The study highlights how AI and machine learning can analyze these behavioral signals to detect early signs of mental health issues. Research shows that data from social media, smartphone sensors, and wearables can effectively predict conditions like depression and anxiety, though results vary across studies and populations.
However, key challenges include data variability, privacy concerns, bias in datasets, and the need for explainable models that clinicians can trust. Ethical considerations—such as informed consent, data security, fairness, and human oversight—are essential for responsible implementation.
The proposed methodology involves collecting behavioral and self-reported data, preprocessing it, and applying machine learning models with explainable AI techniques to identify risk patterns. Results indicate that patterns like increased late-night phone use, reduced activity, and negative language are associated with higher mental health risk, while stable routines and social interaction suggest lower risk.
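As a minimal illustration of the preprocessing step, the sketch below derives simple behavioral features (late-night phone use, social-interaction counts) from raw smartphone event logs. The event format, feature names, and the midnight-to-5 a.m. window are assumptions for illustration, not details taken from the study.

```python
from datetime import datetime

def extract_features(events):
    """Compute illustrative behavioral features from (ISO timestamp, event type) pairs."""
    timestamps = [datetime.fromisoformat(ts) for ts, _ in events]
    n = len(events)
    # Fraction of phone events between midnight and 5 a.m. ("late-night use")
    late_night = sum(1 for t in timestamps if 0 <= t.hour < 5) / n
    # Outgoing communication events as a crude proxy for social interaction
    social = sum(1 for _, kind in events if kind in ("call_out", "sms_out"))
    return {"late_night_fraction": late_night, "social_events": social}

events = [
    ("2024-03-01T01:30:00", "screen_on"),
    ("2024-03-01T02:10:00", "screen_on"),
    ("2024-03-01T14:00:00", "call_out"),
    ("2024-03-01T20:15:00", "sms_out"),
]
features = extract_features(events)
```

Features like these would then be fed, alongside self-reported measures, into the machine learning models described above.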
Overall, the study concludes that digital behavioral monitoring combined with explainable AI can support early detection and intervention in mental health care. While not a replacement for professionals, such systems can act as supportive tools to enhance clinical decision-making, provided ethical safeguards and human oversight are maintained.
Conclusion
Explainable AI applied to digital behavioral signals holds promise for early mental health detection. We have reviewed recent studies showing how smartphone and social media data can be used to predict depression, anxiety, and related conditions[4][23]. With methods such as SHAP and LIME, these models can show why a certain behavior (late-night phone use, negative posts) indicates elevated risk[23][17]. For real-world impact, systems must be developed with ethical safeguards (consent, privacy, fairness) and used to support clinicians, not supplant them.
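To make the idea of a per-feature explanation concrete, the sketch below computes additive contributions for a linear risk model: each feature's contribution is its weight times its deviation from a baseline mean, which is the quantity SHAP generalizes to nonlinear models. The weights, baselines, and feature names here are purely illustrative assumptions.

```python
# Illustrative linear risk model: positive weight = raises risk score.
WEIGHTS = {"late_night_fraction": 2.0, "negative_word_rate": 3.0,
           "daily_steps_z": -1.5}
# Assumed population baseline for each feature.
BASELINE = {"late_night_fraction": 0.1, "negative_word_rate": 0.05,
            "daily_steps_z": 0.0}

def explain(x):
    """Return per-feature contributions, largest risk-increasing first."""
    contribs = {f: WEIGHTS[f] * (x[f] - BASELINE[f]) for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)

# A hypothetical user with heavy late-night use, negative language,
# and activity one standard deviation below normal.
user = {"late_night_fraction": 0.4, "negative_word_rate": 0.15,
        "daily_steps_z": -1.0}
explanation = explain(user)
```

Pairing each risk score with a ranked list like this is one way to give clinicians the human-readable rationale the recommendations below call for.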
Practical Recommendations: Future projects should:
1) Use diverse data: include participants of different ages, ethnicities, and tech access.
2) Prioritize consent: make data use transparent to users; allow easy opt-out.
3) Incorporate explainability: always pair any risk score with a human-readable rationale[23].
4) Involve stakeholders: work with clinicians, patients, and ethicists from the start.
If done carefully, explainable digital phenotyping can augment traditional care, helping healthcare systems move towards proactive, preventive mental health support.
References
[1] [11] A Comprehensive Survey of Datasets for Clinical Mental Health AI Systems https://arxiv.org/html/2508.09809v1
[2] [7] [8] [13] The State of Digital Biomarkers in Mental Health - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC11584197/
[3] [9] [28] [29] JMIR Mental Health - Using Smartphone-Tracked Behavioral Markers to Recognize Depression and Anxiety Symptoms: Cross-Sectional Digital Phenotyping Study https://mental.jmir.org/2026/1/e80765
[4] Predicting Depression From Smartphone Behavioral Markers Using Machine Learning Methods, Hyperparameter Optimization, and Feature Importance Analysis: Exploratory Study - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC8314163/
[5] JMIR mHealth and uHealth - Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data https://mhealth.jmir.org/2019/7/e13209/
[6] [12] [17] [19] Explainable AI-driven depression detection from social media using natural language processing and black box machine learning models - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC12460309/
[7] [15] [30] Datasets of Smartphone Modalities for Depression Assessment: A Scoping Review - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC12710863/
[8] Network-based artificial intelligence in mental healthcare: A systematic review of chatbots, artificial intelligence/machine learning models and ethical considerations in global healthcare networks - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC12901950/
[9] [22] [23] [24] [25] [26] [31] [32] [35] [36] Explainable machine learning for mental health prediction from social media behavior: a nested cross-validation study with SHAP and LIME interpretability - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC12909650/
[10] Explainable artificial intelligence for mental health through transparency and interpretability for understandability | npj Digital Medicine https://www.nature.com/articles/s41746-023-00751-9
[11] [21] Frontiers | Toward explainable AI (XAI) for mental health detection based on language behavior https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2023.1219479/full
[12] Frontiers | Smartphone sensor-based depression detection in campus environments: a proof-of-concept study with small-sample behavioral analysis https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2025.1468334/full
[13] Dreaddit: A Reddit Dataset for Stress Analysis in Social Media - ACL Anthology https://aclanthology.org/D19-6213/
[14] DAIC-WOZ Database https://dcapswoz.ict.usc.edu/